working group
Video game firms found to have broken own UK industry rules on loot boxes
The UK government's decision to let technology companies self-regulate gambling-style loot boxes in video games has been called into question, after some of the developers put in charge of new industry guidelines broke their own rules. In the past six months, the advertising regulator has upheld complaints against three companies involved in drawing up industry rules, including the leading developer Electronic Arts (EA), for failing to disclose that their games contained loot boxes. An expert who submitted the complaints said he had found hundreds more examples of breaches but had only taken a handful to the Advertising Standards Authority (ASA) in order to highlight the problem. Loot boxes are in-game features that allow players to pay, with real money or virtual currency, to open a digital envelope containing random prizes, such as an outfit or a weapon for a character. Despite warnings from experts that loot boxes carry similar risks to gambling, the then Department for Digital, Culture, Media and Sport said in July 2022 it would not follow other countries, such as Belgium, in choosing to regulate them as gambling products.
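The mechanic at issue is straightforward to sketch in code: the player spends currency, real or virtual, on a draw from a weighted prize table. The snippet below is a generic, hypothetical illustration; the prize names, drop rates, and `open_loot_box` helper are invented for the example and do not come from any real game.

```python
# Illustrative sketch of a typical loot box mechanic: the player pays to
# draw a random prize from a weighted table. All names and drop rates
# here are hypothetical, not taken from any actual game.
import random

DROP_TABLE = {
    "common outfit":  0.70,   # hypothetical drop rates
    "rare weapon":    0.25,
    "legendary skin": 0.05,
}

def open_loot_box(wallet: int, price: int = 100) -> tuple[int, str]:
    """Deduct the box price and return the remaining balance and a prize."""
    if wallet < price:
        raise ValueError("insufficient currency")
    prizes = list(DROP_TABLE)
    weights = list(DROP_TABLE.values())
    prize = random.choices(prizes, weights=weights, k=1)[0]
    return wallet - price, prize

balance, prize = open_loot_box(wallet=500)
print(balance, prize)  # e.g. 400 common outfit
```

The randomness is the crux of the regulatory debate: the purchase price is fixed, but the value of what arrives is determined by chance, which is why the ASA requires advertising to disclose the mechanic's presence.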
- Europe > United Kingdom (0.51)
- Europe > Belgium (0.25)
- Europe > Denmark > Capital Region > Copenhagen (0.05)
- Leisure & Entertainment > Games > Computer Games (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (0.51)
- Information Technology > Artificial Intelligence > Games (0.66)
- Information Technology > Communications > Social Media (0.56)
House Democrats launch 'working group' on artificial intelligence
House Democrats are launching a working group aimed at crafting artificial intelligence policy, the latest attempt by federal lawmakers to wrap their heads around legislating the rapidly advancing sector. The New Democrat Coalition, a group of nearly 100 House Democrats that touts itself as "pragmatic," unveiled the new initiative this week. Rep. Don Beyer, D-Va., one of the initiative's vice chairs, told Fox News Digital he hopes the working group will "help develop real, practicable ideas that will put guardrails in place for AI." "I continue to be focused on a variety of areas related to AI, including safety and security, transparency, the future of work, preventing civil rights abuses, health care and suicide prevention, and more, and have discussions ongoing about legislation in these areas with members of both parties," Beyer said. "Congress has to get up to speed on this issue, and I think the New Dems' AI working group will be a constructive setting for progress." The Biden administration and Congress are examining how to regulate AI. Working group Chair Rep. Derek Kilmer, D-Wash., suggested it could lay the groundwork for an AI regulatory framework in the House of Representatives. "We are already seeing how breakthroughs in this emerging technology present both great opportunities and challenges, with potential disruptions for workers, for democracy, and for national security," Kilmer said. "As AI's applications expand and change, it is incumbent on lawmakers to address its unique opportunities and challenges by creating a regulatory framework that encourages growth while guarding against potential risks." Rep. Seth Moulton, D-Mass., another member of the working group and a Marine veteran, said he was concerned with how AI would "transform warfare" and called on Congress to put up responsible guardrails against the technology's most devastating possibilities. "It's going to be impossible for Congress to really stay ahead of AI, but what we can and should do is to take very seriously AI's most dangerous use cases and develop solutions and safeguards that apply directly to those cases," Moulton told Fox News Digital. "I'm also particularly concerned about how AI will transform warfare."
- North America > United States (0.72)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.05)
- Asia > China > Beijing > Beijing (0.05)
- Media > News (1.00)
- Law > Statutes (0.92)
- Government > Regional Government > North America Government > United States Government (0.72)
Practical Machine Learning in R: Nwanganga, Fred, Chapple, Mike
Mike Chapple is Teaching Professor of IT, Analytics, and Operations at the University of Notre Dame's Mendoza College of Business, where he teaches graduate and undergraduate courses in cybersecurity and business analytics. Prior to joining Notre Dame's faculty, Mike served as Senior Director for IT Service Delivery at the University. In this role, he oversaw the information security, IT compliance, cloud computing, data governance, IT architecture, learning platforms, project management, strategic planning and product management functions for the Office of Information Technologies. Mike led Notre Dame's Cloud First strategy, which moved 80% of the institution's IT services into the cloud over three years. Mike previously served as Senior Advisor to the Executive Vice President at Notre Dame for two years.
- North America > United States > Indiana > St. Joseph County > Notre Dame (0.07)
- North America > United States > Idaho (0.07)
- Information Technology > Security & Privacy (1.00)
- Education > Educational Setting > Higher Education (0.42)
- Government > Military > Cyberwarfare (0.39)
Setting the standard for AI in dermatology - AIMed
Dr. Rubeta Matin, NHS Consultant Dermatologist, reveals the challenges of setting up a new national skin database to support the development of dermatological AI in the UK. It's common knowledge that the chances of survival increase dramatically if melanoma is detected and treated early. However, many algorithm-based applications that claim to identify potentially dangerous-looking pigmented lesions on the skin have not been formally and appropriately validated in intervention studies. There are also few systematic, rigorous reviews establishing the true accuracy of these skin cancer diagnosis algorithms, especially those tested in artificial research settings that may not be representative of the real world. Concerns like these drive dermatologists to question whether the false assurance given by these applications may delay individuals from seeking medical advice. Last February, a new study published in the BMJ revealed that mobile applications that assess the risks of suspicious moles may not be reliable enough to detect all forms of skin cancer.
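The "true accuracy" question comes down to validation arithmetic: an app can report a high headline accuracy while still missing a clinically unacceptable share of melanomas, because malignant lesions are rare in any test set. A minimal sketch of the calculation, using hypothetical counts that are not drawn from the BMJ study:

```python
# Sketch of the validation arithmetic behind diagnostic-accuracy claims.
# Headline accuracy can hide low sensitivity, i.e. missed melanomas.
# The counts below are hypothetical, invented for illustration only.
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical app tested on 1,000 lesions, 50 of them malignant:
sens, spec = sensitivity_specificity(tp=40, fn=10, tn=870, fp=80)
accuracy = (40 + 870) / 1000
print(f"accuracy={accuracy:.0%} sensitivity={sens:.0%} specificity={spec:.0%}")
# accuracy=91% sensitivity=80% specificity=92%
# -> a "91% accurate" app still misses 10 of 50 melanomas
```

This is why validation in representative, real-world populations matters: the same model can show very different sensitivity when the mix of lesions changes.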
- North America > United States (0.16)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Health & Medicine > Therapeutic Area > Dermatology (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Skin Cancer (0.79)
- Information Technology > Communications > Mobile (0.36)
- Information Technology > Artificial Intelligence > Applied AI (0.31)
UC creates recommendations for responsible use of artificial intelligence
The University of California has issued recommendations charting a path toward the responsible use of artificial intelligence in future UC endeavors. UC's increasing use of AI has raised its overall productivity as an institution, according to the UC Office of the President, or UCOP. However, with the implementation of AI there is also potential for problems to arise. To address this, former UC President Janet Napolitano and current president Michael Drake created the Presidential Working Group on Artificial Intelligence, or the Working Group, in August 2020. The Working Group's final report noted that the group consists of 32 faculty and staff from all 10 UC campuses, plus additional representatives from UC Legal and the Office of Ethics, Compliance and Audit Services, among other groups.
- Law (0.72)
- Information Technology > Security & Privacy (0.34)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.33)
ENISA AI Threat Landscape Report Unveils Major Cybersecurity Challenges
Today, the European Union Agency for Cybersecurity (ENISA) released its Artificial Intelligence Threat Landscape Report, unveiling the major cybersecurity challenges facing the AI ecosystem. ENISA's study takes a methodological approach to mapping the key players and threats in AI. The report follows up on the priorities defined in the European Commission's 2020 AI White Paper. The ENISA Ad-Hoc Working Group on Artificial Intelligence Cybersecurity, with members from EU institutions, academia and industry, provided input and supported the drafting of this report. The benefits of this emerging technology are significant, but so are the concerns, such as potential new avenues of manipulation and attack methods.
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (1.00)
- Government > Military > Cyberwarfare (1.00)
IAB AI Working Group to Establish Artificial Intelligence Standards
The Interactive Advertising Bureau (IAB), the national trade association for the digital media and marketing industries, is focusing its AI Standards Working Group on developing artificial intelligence (AI) standards, best practices, use cases, and terminology in an effort to scale AI and enable the industry to realize its full potential. The group is newly co-chaired by IBM Watson Advertising and Nielsen. The first release of 2021, "Artificial Intelligence Use Cases and Best Practices for Marketing," will help executive leaders, marketers, and technologists get the most from AI, and do it responsibly. Created for those already working with AI or looking to leverage it in their business, this guide draws directly from the real-world experience of co-chairs IBM Watson Advertising and Nielsen as well as top publishers, agencies, and ad tech companies in the industry. It's not an ivory-tower overview of AI: it's a specific, practical guide for executives who are in the thick of it.
- North America > United States > New York (0.06)
- North America > United States > District of Columbia > Washington (0.05)
- Information Technology (1.00)
- Media > News (0.40)
A Decentralized Approach Towards Responsible AI in Social Ecosystems
For AI technology to fulfill its full promises, we must design effective mechanisms into the AI systems to support responsible AI behavior and curtail potential irresponsible use, e.g. in areas of privacy protection, human autonomy, robustness, and prevention of biases and discrimination in automated decision making. In this paper, we present a framework that provides computational facilities for parties in a social ecosystem to produce the desired responsible AI behaviors. To achieve this goal, we analyze AI systems at the architecture level and propose two decentralized cryptographic mechanisms for an AI system architecture: (1) using Autonomous Identity to empower human users, and (2) automating rules and adopting conventions within social institutions. We then propose a decentralized approach and outline the key concepts and mechanisms based on Decentralized Identifier (DID) and Verifiable Credentials (VC) for a general-purpose computational infrastructure to realize these mechanisms. We argue the case that a decentralized approach is the most promising path towards Responsible AI from both the computer science and social science perspectives.
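For readers unfamiliar with the two W3C building blocks the paper leans on, a minimal sketch of their data model follows. The field values and the did:example method below are placeholders in the spirit of the W3C DID and VC specifications; this is not the paper's implementation, and the proof field is left empty rather than showing a real signature.

```python
# Minimal sketch of the two W3C primitives the paper builds on: a DID
# document binding an identifier to a verification key, and a Verifiable
# Credential carrying signed claims about that identifier. All values
# are placeholders; the did:example method comes from the W3C specs.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",
    "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyMultibase": "z6Mk-placeholder",  # holder's public key
    }],
}

verifiable_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer",
    "credentialSubject": {
        "id": did_document["id"],
        # Example of a claim an AI system could be required to check
        # before acting on a user's data (hypothetical field name):
        "consentsToAutomatedDecision": False,
    },
    # In practice a cryptographic proof (e.g. an Ed25519 signature by
    # the issuer's key) goes here; omitted to keep the sketch minimal.
    "proof": None,
}
```

The decentralized-identity design choice is that the user, not a platform, controls the key in the DID document, so rules like the consent claim above can be verified by any party without a central authority.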
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (0.93)
VA's AI 'to-go' delivery model is morphing into a platform - FedScoop
Interest in an artificial intelligence "to-go" delivery model is building, with more than a dozen Department of Veterans Affairs sites looking to pilot modules, the agency's head of AI said Thursday. VA developed the initial module to assist its medical centers with COVID-19 individual risk prediction, but its hundreds of centers and thousands of facilities have other uses for the statistical models being tested, said Gil Alterovitz, VA's director of AI. Additional use cases haven't been chosen, but AI models will be packaged as embeddable software add-ons for rapid deployment based on the original. "We're now using that to generalize and essentially created a new platform so that artificial intelligence research and development can be added as modules in the future," Alterovitz said during day three of FedTalks, presented by FedScoop. Once the AI technology and health application have been vetted, any medical center will be able to access a module when VA shares a secure, internal weblink.
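The article does not describe the modules' programming interface, but the "embeddable add-on" idea can be sketched. Everything below, the `RiskModule` protocol, the `CovidRiskModule` class, and its scoring rule, is hypothetical and stands in for whatever vetted model VA actually ships.

```python
# Hypothetical sketch of an embeddable risk-prediction module: a common
# interface lets a medical center's system plug in new vetted models
# without changing its own code. Not VA's actual design.
from typing import Protocol

class RiskModule(Protocol):
    """Interface a medical center's system would embed and call."""
    name: str
    def predict(self, patient: dict) -> float: ...

class CovidRiskModule:
    """Toy stand-in for the COVID-19 individual risk prediction module."""
    name = "covid19-risk"
    def predict(self, patient: dict) -> float:
        # Placeholder scoring rule; a real module would wrap a vetted
        # statistical model, not two hand-picked features.
        score = 0.02 * patient.get("age", 0)
        score += 0.5 if patient.get("copd") else 0.0
        return min(score, 1.0)

module: RiskModule = CovidRiskModule()
print(module.predict({"age": 70, "copd": True}))  # 1.0 (capped)
```

Packaging each model behind the same interface is what would let new use cases be "added as modules": deployment becomes distributing a new class, not rebuilding each center's system.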
- Health & Medicine > Health Care Providers & Services (1.00)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.58)
FSC Korea Kicks Off Working Group on Artificial Intelligence
The working group will seek to promote AI adoption in financial services as part of the government's 'New Deal' policy initiative. South Korea's FSC (Financial Services Commission) on Thursday (16 July) held a kick-off meeting of a new working group tasked with promoting the use of artificial intelligence (AI) technology in financial services. "AI technology can help improve effectiveness, inclusiveness and accountability while lowering costs in providing financial services," the FSC said, pointing to credit scoring, loan assessment, insurance and asset management as service areas that will benefit. The move to promote AI adoption in financial services is part of the government's KRW 114 trillion 'New Deal' policy initiative, which centres on creating more tech sector jobs and promoting digitalisation of industries as a new post-coronavirus growth engine. At the meeting, participants discussed major trends and policies surrounding the application of AI in financial services, as well as the working group's plans to promote the sector.